| Name | Version | Summary | Date |
|------|---------|---------|------|
| audiolm-superfeel | 2.1.1 | AudioLM - Language Modeling Approach to Audio Generation from Google Research - Pytorch | 2024-05-04 10:34:49 |
| soundstorm-superfeel | 0.4.5 | SoundStorm - Efficient Parallel Audio Generation from Google Deepmind, in Pytorch | 2024-05-04 10:33:55 |
| self-reasoning-tokens-pytorch | 0.0.2 | Self Reasoning Tokens | 2024-05-03 14:38:09 |
| x-transformers | 1.28.5 | X-Transformers - Pytorch | 2024-05-03 13:25:39 |
| MEGABYTE-pytorch | 0.3.0 | MEGABYTE - Pytorch | 2024-05-03 02:14:25 |
| autoawq | 0.2.5 | AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. | 2024-05-02 18:32:41 |
| infini-transformer-pytorch | 0.0.9 | Infini-Transformer in Pytorch | 2024-05-02 16:34:43 |
| datadreamer.dev | 0.35.0 | Prompt. Generate Synthetic Data. Train & Align Models. | 2024-05-02 05:36:30 |
| sibila | 0.4.1 | Structured queries from local or online LLM models | 2024-04-29 17:51:25 |
| REaLTabFormer | 0.1.7 | A novel method for generating tabular and relational data using language models. | 2024-04-28 18:00:11 |
| querent | 3.0.8 | The Asynchronous Data Dynamo and Graph Neural Network Catalyst | 2024-04-28 02:03:57 |
| s5-pytorch | 0.2.1 | S5 - Simplified State Space Layers for Sequence Modeling - Pytorch | 2024-04-26 09:39:13 |
| nncf | 2.10.0 | Neural Networks Compression Framework | 2024-04-25 12:01:53 |
| gliner | 0.1.12 | Generalist model for NER (Extract any entity types from texts) | 2024-04-24 10:26:04 |
| deformable-attention | 0.0.19 | Deformable Attention - from the paper "Vision Transformer with Deformable Attention" | 2024-04-23 23:45:52 |
| BS-RoFormer | 0.4.1 | BS-RoFormer - Band-Split Rotary Transformer for SOTA Music Source Separation | 2024-04-21 16:44:28 |
| graph-transformer | 0.2.1 | This is the implementation of Graph Transformer (https://www.ijcai.org/proceedings/2021/0214.pdf) | 2024-04-20 11:09:13 |
| zeldarose | 0.9.0 | Train transformer-based models | 2024-04-17 15:36:46 |
| trl | 0.8.4 | Train transformer language models with reinforcement learning. | 2024-04-17 15:16:50 |
| inseq | 0.6.0 | Interpretability for Sequence Generation Models 🔍 | 2024-04-13 13:37:37 |